ck_tile grouped gemm: more padding #574

Draft
matthiasdiener wants to merge 1 commit into dev from mdiener/cktile-grouped-gemm-padding

Conversation

Contributor

@matthiasdiener commented May 5, 2026

Description

Please include a brief summary of the changes, relevant motivation and context.

Fixes # (issue)

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@matthiasdiener matthiasdiener requested a review from sudhu2k May 5, 2026 00:15
@matthiasdiener matthiasdiener self-assigned this May 5, 2026
@matthiasdiener matthiasdiener requested review from aris134 May 5, 2026 15:36
Comment on lines +249 to +251
if (need_k_pad && ctx.transB) {
return false;
}
Contributor

Assuming we have access to B column-wise storage, could we avoid this fallback by selecting B’s column-wise buffer and calling CK with transB=false, while preserving the same logical GEMM? In other words, is the incorrect-result issue specific to CK’s kPadK + transB=true / ColMajor-B path, or would the columnwise-buffer normalization still hit the same underlying issue?
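To make the layout equivalence behind this question concrete, here is a minimal numpy sketch (standing in for the CK call; names and shapes are hypothetical) showing that a transB=true GEMM on row-major B computes the same result as a transB=false GEMM on B's column-wise buffer:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 4, 8, 6
A = rng.standard_normal((M, K))
B = rng.standard_normal((N, K))  # transB=true layout: B stored as N x K

# Path 1: the transB=true GEMM, C = A @ B^T.
C_trans = A @ B.T

# Path 2: the proposed normalization -- reinterpret B's column-wise
# storage as a K x N row-major buffer and call with transB=false.
B_colwise = np.ascontiguousarray(B.T)
C_notrans = A @ B_colwise

assert np.allclose(C_trans, C_notrans)  # same logical GEMM
```

Whether this sidesteps the fallback depends on whether the incorrect results come from the logical transpose itself or from CK's kPadK + ColMajor-B addressing path, which is exactly what the question above asks.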

". Falling back. "
"CK_Tile constraints for bf16/fp16: "
"contiguous dim of A and B must be dword-aligned (even), "
"N must be multiple of 16 (GetVectorSizeC).");
Contributor

@aris134 commented May 6, 2026

I'm not sure this function (GetVectorSizeC) implies that N must be a multiple of 16. For the M16N16K32 warp-gemm path, the relevant attributes appear to be kN = 16 and kCNLane = 16, so in the row-major non-TransposeC case this returns kCNLane / kN = 1. That seems to describe the per-thread contiguous C vector size rather than an N divisibility requirement. Is there another place where the N % 16 == 0 constraint is enforced or assumed?
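A minimal sketch of the arithmetic this comment refers to, using the attribute values cited above (kN = 16 and kCNLane = 16 are taken from the comment, not read out of the CK source):

```python
# Hypothetical mirror of the M16N16K32 warp-gemm attributes cited above.
kN = 16       # warp-tile N extent
kCNLane = 16  # lanes along N in the C layout


def c_vector_size_row_major() -> int:
    """Per-thread contiguous C vector size, row-major non-TransposeC path."""
    return kCNLane // kN


print(c_vector_size_row_major())  # 1 for these attributes
```

If this reading is right, GetVectorSizeC describes a per-thread vector width of 1 here, which by itself would not impose N % 16 == 0.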

reason="Only enable CUTLASS/CK grouped gemm on Hopper or ROCm",
)
@pytest.mark.parametrize("dtype", [torch.bfloat16, torch.float16], ids=str)
@pytest.mark.parametrize("layout", ["TN", "NN"])
Contributor

NT? Should the NT layout also be parametrized here?

Contributor

Oh, perhaps the reason NT is left out has to do with your comment about kPadK + column-major B.

Comment on lines +3146 to +3147
if pad_dim == "K":
gemm_k = unaligned_k
Contributor

This test seems to cover one unaligned dimension at a time, but not the combined kPadM && kPadK case. Since the dispatch logic can instantiate a runner with padding set for both dimensions, should we add a case where both M and K are unaligned?
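If such a case were added, one possible shape-picking helper (hypothetical names, sketched against this test's pad_dim convention) could look like:

```python
ALIGNED = 256    # even and a multiple of 16, satisfies the stated CK_Tile constraints
UNALIGNED = 255  # odd, so it violates the dword-alignment (even) constraint


def make_dims(pad_dim: str) -> tuple[int, int]:
    """Pick (gemm_m, gemm_k) so every dimension named in pad_dim is unaligned.

    pad_dim may be "M", "K", or a hypothetical combined "MK" case that
    exercises the kPadM && kPadK runner in a single test.
    """
    gemm_m = UNALIGNED if "M" in pad_dim else ALIGNED
    gemm_k = UNALIGNED if "K" in pad_dim else ALIGNED
    return gemm_m, gemm_k
```

With "MK" in the parametrize list, both dimensions are unaligned at once, covering the combined-padding dispatch path.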
